Results 1 - 2 of 2
1.
J Real Time Image Process ; 19(3): 551-563, 2022.
Article in English | MEDLINE | ID: covidwho-1859096

ABSTRACT

COVID-19 is caused by a virus transmitted through small droplets released during speech, sneezing, and coughing, and mostly by inhalation between individuals in close contact. The pandemic is still ongoing, causes acute respiratory infection, and has resulted in many deaths. The risk of COVID-19 spread can be reduced by avoiding close physical contact among people. This research proposes a real-time AI platform for people detection and social-distancing classification of individuals based on a thermal camera. YOLOv4-tiny is adopted for object detection: its compact neural network architecture makes it suitable for low-cost embedded devices and a better option than other approaches for real-time detection. An algorithm is also implemented to monitor social distancing from a bird's-eye perspective. The proposed approach is applied to videos acquired through thermal cameras to detect people, classify social distancing, and simultaneously measure individuals' skin temperature. To tune the model for person detection, the training stage is carried out on thermal images of various indoor and outdoor environments. The final prototype has been deployed on low-cost Nvidia Jetson devices (Xavier and Jetson Nano) equipped with a fixed camera. The proposed approach is suitable for a surveillance system within sustainable smart cities for people detection, social-distancing classification, and body-temperature measurement, helping authorities to verify individuals' compliance with social distancing while monitoring their skin temperature.
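The bird's-eye-view distance check described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes detections arrive as `(x1, y1, x2, y2)` bounding boxes and that a 3×3 ground-plane homography `H` has already been calibrated for the fixed camera; the function names are hypothetical.

```python
import math

def ground_point(box):
    """Bottom-centre of a bounding box (x1, y1, x2, y2):
    roughly where the detected person touches the ground."""
    x1, _, x2, y2 = box
    return ((x1 + x2) / 2.0, y2)

def to_birds_eye(point, H):
    """Map an image point onto the ground plane with a 3x3 homography H
    (given as a nested list), using the standard projective division."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def violating_pairs(boxes, H, min_dist):
    """Index pairs of detections whose ground-plane distance
    falls below min_dist (in the homography's units, e.g. metres)."""
    pts = [to_birds_eye(ground_point(b), H) for b in boxes]
    return [(i, j)
            for i in range(len(pts))
            for j in range(i + 1, len(pts))
            if math.dist(pts[i], pts[j]) < min_dist]
```

With an identity homography, two boxes whose bottom-centres are one unit apart are flagged while a distant third box is not; in practice `H` would come from a one-time calibration of the fixed thermal camera.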

2.
2021 IEEE International Conference on Big Data, Big Data 2021 ; : 899-908, 2021.
Article in English | Scopus | ID: covidwho-1730897

ABSTRACT

This paper studies an emerging and important problem: identifying misleading COVID-19 short videos in which the misleading content is jointly expressed in the visual, audio, and textual content. Existing solutions for misleading-video detection mainly focus on the authenticity of videos or audio against AI-generated forgeries (e.g., deepfakes) or video manipulation, and are insufficient for our problem, where most videos are user-generated and intentionally edited. Two critical challenges exist: (i) how to effectively extract information from the distractive and manipulated visual content of TikTok videos, and (ii) how to efficiently aggregate heterogeneous information across the different modalities of short videos. To address these challenges, we develop TikTec, a multimodal misinformation detection framework that explicitly exploits captions to accurately capture the key information in distractive video content, and effectively learns the composed misinformation jointly conveyed by the visual and audio content. We evaluate TikTec on a real-world COVID-19 video dataset collected from TikTok. Evaluation results show that TikTec achieves significant performance gains over state-of-the-art baselines in accurately detecting misleading COVID-19 short videos. © 2021 IEEE.
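The cross-modal aggregation step this abstract alludes to can be illustrated with a generic attention-weighted late-fusion sketch. This is not TikTec's actual architecture (the paper's model is more involved); the modality names, `fuse_modalities`, and the raw relevance scores are assumptions for illustration only.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw relevance scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_modalities(features, relevance):
    """Attention-style weighted average of per-modality feature vectors.
    `features` maps modality name -> feature vector (equal lengths);
    `relevance` maps modality name -> raw relevance score."""
    names = sorted(features)
    attn = softmax([relevance[n] for n in names])
    dim = len(features[names[0]])
    fused = [0.0] * dim
    for a, n in zip(attn, names):
        for k in range(dim):
            fused[k] += a * features[n][k]
    return fused
```

A downstream classifier would then score the fused vector; the point of the weighting is that an uninformative modality (e.g. distractive visuals) can be down-weighted rather than dominating the prediction.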
